17 research outputs found

    Accurate PET Reconstruction from Reduced Set of Measurements based on GMM

    In this paper, we provide a novel method for the estimation of the unknown parameters of a Gaussian Mixture Model (GMM) in Positron Emission Tomography (PET). The vast majority of PET imaging methods are based on a reconstruction model defined by values on a pixel/voxel grid. Instead, we propose a continuous parametric GMM model. Usually, Expectation-Maximization (EM) iterations are used to obtain the GMM parameters from a set of point-wise measurements. The challenge of PET reconstruction is that each measurement is represented by a so-called line of response (LoR) instead of a point. The goal is to estimate the unknown parameters of the Gaussian mixture directly from a relatively small set of LoRs. The estimation relies on two facts: the marginal distribution theorem for the multivariate normal distribution, and the properties of the marginal distribution of the LoRs. We propose an iterative algorithm, resembling the maximum-likelihood method, to determine the unknown parameters. Results show that the estimated parameters follow the correct ones with great accuracy. This is promising, since a high-quality parametric reconstruction model can be obtained from lower-dose measurements and is directly suitable for further processing. Comment: 23 pages, 10 figures, submitted to "Signal Processing" (Elsevier)
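    For context, the conventional point-wise setting that the paper generalizes can be sketched as a plain EM fit of a one-dimensional two-component GMM. This is an illustrative baseline only (the component values, sample counts, and iteration budget are arbitrary choices), not the LoR-based estimator proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy point-wise data: two 1-D Gaussian components (the standard EM
# setting the paper contrasts with its LoR-based estimator).
x = np.concatenate([rng.normal(-2.0, 0.5, 400), rng.normal(3.0, 1.0, 600)])

# Initial guesses for the weights, means, and variances.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(200):
    # E-step: responsibility of each component for each sample.
    dens = w / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate the parameters from the responsibilities.
    n_k = r.sum(axis=0)
    w = n_k / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n_k
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
```

    The paper's setting replaces the point samples above with lines of response; the alternating E/M structure remains, but the responsibilities must be computed from the marginal distributions along each LoR.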

    Multi-channel Blind Image Deconvolution Based on Innovations

    The linear mixture model (LMM) has recently been used for the multi-channel representation of a blurred image. This enables the use of multivariate data analysis methods, such as independent component analysis (ICA), to solve blind image deconvolution as an instantaneous blind source separation (BSS) problem, requiring no a priori knowledge about the size and origin of the blurring kernel. However, a serious weakness of this approach remains: the statistical dependence between the hidden variables of the LMM. The contribution of this paper is the application of ICA algorithms to the innovations of the LMM in order to learn the unknown basis matrix. The hidden source image is recovered by applying the pseudo-inverse of the learnt basis matrix to the original LMM. The success of this approach is due to the property of innovations of being more independent and more non-Gaussian than the original processes. Consistent simulation and experimental results demonstrate the viability of the proposed concept.
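    The claim that innovations are "more non-Gaussian than the original processes" can be illustrated on a toy AR(1) process. The AR coefficient and the Laplacian driving noise below are hypothetical choices, and first-order inverse filtering is only a stand-in for the innovations computation used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def excess_kurtosis(s):
    """Sample excess kurtosis; zero for a Gaussian."""
    s = (s - s.mean()) / s.std()
    return float((s ** 4).mean() - 3.0)

# AR(1) process driven by Laplacian (super-Gaussian) innovations.
e = rng.laplace(size=20000)
a = 0.9
x = np.empty_like(e)
x[0] = e[0]
for i in range(1, len(e)):
    x[i] = a * x[i - 1] + e[i]

# Mixing many past innovations pushes x toward Gaussianity; the
# innovations recovered by the inverse AR filter are far more
# super-Gaussian, which is why ICA works better on them.
e_hat = x[1:] - a * x[:-1]
k_process = excess_kurtosis(x)
k_innov = excess_kurtosis(e_hat)
```

    The same effect carries over to the image setting: ICA separates the innovations more reliably, and the learnt basis matrix is then applied back to the original LMM.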

    Accurate 2D Reconstruction for PET Scanners based on the Analytical White Image Model

    In this paper, we provide a precise mathematical model of the crystal-to-crystal response, which is used to generate the white image - a compensation model needed to overcome the physical limitations of the PET scanner. We present a closed-form solution, as well as several accurate approximations necessitated by the complexity of the exact mathematical expressions. We show, experimentally and analytically, that the difference between the best approximations and the real crystal-to-crystal response is insignificant. The obtained responses are used to generate the white image compensation model. It can be written as a single closed-form expression, making it easy to incorporate into known reconstruction methods. The maximum likelihood expectation maximization (MLEM) algorithm is modified and our white image model is integrated into it. The modified MLEM algorithm is not based on the system matrix; rather, it is based on ray-driven projections and back-projections. The compensation model provides all the necessary information about the system. Finally, we validate our approach on synthetic and real data. For the real-world acquisition, we use the Raytest ClearPET camera for small animals and the NEMA NU 4-2008 phantom. The proposed approach outperforms competing, non-compensated reconstruction methods. Comment: 37 pages, 16 figures

    Signal Decomposition Methods for Reducing Drawbacks of the DWT

    Besides its many advantages, the wavelet transform has several drawbacks, e.g. ringing, shift variance, aliasing, and a lack of directionality. Some of them can be eliminated by using the wavelet packet transform, the stationary wavelet transform, the complex wavelet transform, adaptive directional lifting-based wavelet transforms, or adaptive wavelet filter banks that use either the L2 or the L1 norm. This paper gives an overview of these methods.
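    The shift variance mentioned above can be demonstrated with a single-level Haar transform: a one-sample shift of the input changes the detail-coefficient energy, which is exactly what the stationary and complex wavelet transforms are designed to avoid. The step signal and transform length are arbitrary illustrative choices:

```python
import numpy as np

def haar_detail(x):
    """Single-level Haar wavelet detail coefficients (critically sampled)."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

# A unit step, and the same step shifted by one sample.
x = np.zeros(16)
x[8:] = 1.0
y = np.roll(x, 1)

d_x = haar_detail(x)
d_y = haar_detail(y)

# Because the transform is critically sampled, the detail energy
# depends on where the step falls relative to the even/odd grid.
e_x = float(np.sum(d_x ** 2))  # step aligned with a pair boundary: zero detail
e_y = float(np.sum(d_y ** 2))  # shifted step: nonzero detail energy
```

    A shift-invariant (undecimated) transform would produce the same detail energy for both signals, at the cost of redundancy.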

    Overcoming spatio-angular trade-off in light field acquisition using compressive sensing

    In contrast to conventional cameras, which capture a 2D projection of a 3D scene by integrating over the angular domain, light field cameras preserve the angular information of individual light rays by capturing a 4D light field of a scene. On the one hand, light field photography enables powerful post-capture capabilities such as refocusing, virtual aperture, depth sensing, and perspective shift. On the other hand, it has several drawbacks, namely the high dimensionality of the captured light fields and a fundamental trade-off between spatial and angular resolution in the camera design. In this paper, we propose a compressive sensing approach to light field acquisition from a sub-Nyquist number of samples. Using an off-the-shelf measurement setup consisting of a digital projector and a Lytro Illum light field camera, we demonstrate the efficiency of the compressive sensing approach by improving the spatial resolution of the acquired light field. This paper presents a proof of concept with a simplified 3D scene as the scene of interest. Results obtained by the proposed method show a significant improvement in the spatial resolution of the light field, as well as preserved post-capture capabilities.
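    As a generic illustration of sub-Nyquist recovery (not the paper's projector/Lytro measurement setup), the sketch below recovers a synthetic sparse signal from random Gaussian measurements with iterative soft-thresholding (ISTA); the dimensions, sparsity, and regularization weight are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# A length-256 signal with only 8 nonzero entries, observed through
# m = 80 random Gaussian measurements (sub-Nyquist: m < n).
n, m, k = 256, 80, 8
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: proximal gradient descent on 0.5*||Ax - y||^2 + lam*||x||_1.
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part
lam = 0.005
x = np.zeros(n)
for _ in range(5000):
    z = x - (A.T @ (A @ x - y)) / L   # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

    The light field case follows the same recipe with a much larger signal: sparsity in a suitable transform domain replaces literal sparsity, and the measurement matrix is realized optically.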

    Detecting the number of components in a non-stationary signal using the Rényi entropy of its time-frequency distributions

    A time-frequency distribution provides many advantages in the analysis of multicomponent non-stationary signals. The simultaneous representation of a signal with respect to the time and frequency axes defines the signal amplitude, frequency, bandwidth, and the number of components at each time moment. The Rényi entropy, applied to a time-frequency distribution, is shown to be a valuable indicator of signal complexity. The aim of this paper is to determine which of the considered time-frequency distributions (TFDs) (namely, the Wigner-Ville distribution, the Choi-Williams distribution, and the spectrogram) has the best properties for estimating the number of components when there is no prior knowledge of the signal. The optimal Rényi entropy parameter α is determined for each TFD. Accordingly, the effects of different time durations, bandwidths, and amplitudes of the signal components on the Rényi entropy are analysed. The concept of a class, when the Rényi entropy is applied to TFDs, is also introduced.
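    The component-counting idea can be sketched with a spectrogram-based TFD: for well-separated, equal-energy components, each additional component raises the order-α Rényi entropy by roughly one bit. The window length, tone frequencies, and α = 3 below are illustrative choices, not the optimal parameters determined in the paper:

```python
import numpy as np

def spectrogram(x, nwin=64):
    """Magnitude-squared STFT with a Hann window and hop = nwin // 2."""
    hop = nwin // 2
    w = np.hanning(nwin)
    frames = [x[i:i + nwin] * w for i in range(0, len(x) - nwin, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

def renyi_entropy(tfd, alpha=3):
    """Order-alpha Renyi entropy (in bits) of a TFD normalised to unit sum."""
    p = tfd / tfd.sum()
    return float(np.log2((p ** alpha).sum()) / (1 - alpha))

# One tone vs. two well-separated, equal-amplitude tones.
t = np.arange(4096) / 4096.0
one = np.sin(2 * np.pi * 500 * t)
two = one + np.sin(2 * np.pi * 1500 * t)

h1 = renyi_entropy(spectrogram(one))
h2 = renyi_entropy(spectrogram(two))
n_est = 2 ** (h2 - h1)  # each added component contributes ~1 bit
```

    Cross terms make this counting rule less clean for the Wigner-Ville distribution, which is one reason the choice of TFD and of α matters in practice.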

    Centre of Excellence for Computer Vision

    The Centre of Excellence for Computer Vision at the Faculty of Electrical Engineering and Computing (FER) was founded in 2012 with the aim of bringing together relevant researchers from FER and from other constituents of the University. Seven constituents of the University of Zagreb participated in founding the Centre. The goals of the Centre are to strengthen the international visibility of the University of Zagreb in the field of computer vision, to create a critical mass of researchers for joint participation in larger scientific research and development projects, to improve the quality of doctoral research in computer vision, and to encourage a joint approach to industry for cooperation and technology transfer. The paper presents the motivation for founding the Centre, as well as the scientific and professional activities of its members.

    Signal Decomposition Using Time-Variant and Non-Linear Filter Banks

    In this thesis, a new technique for signal subband decomposition using time-variant and non-linear filter banks is proposed. The construction of the proposed perfect-reconstruction filter banks is based on the ladder (lifting) structure, derived from a method of biorthogonal wavelet filter bank design. Filter parameters are changed according to predefined criteria, in order to adapt the filters to the signal properties. In spite of their time variance, the described filter banks possess good convergence and regularity properties. This makes it possible to form wavelet trees or wavelet packet trees, which leads to efficient realizations. The resulting time-variant and non-linear filter banks retain the good properties of wavelets: good localization in the time and frequency domains. On the other hand, the proposed filter banks can adapt to the properties of the analyzed signal, resulting in a good approximation of the optimal filter banks. Since the decomposition functions change at each step, this technique represents a new class of wavelet-like transforms, or second-generation wavelets.
It is shown that time-variant filter banks are more suitable for the analysis of non-stationary signals than fixed filter banks. The employed ladder structure enables the mapping of integer signals to integer transformation coefficients. It also enables various non-linear operations on the signal, such as statistical (e.g. median) filtering. Despite their non-linearity, the proposed filter banks retain the perfect-reconstruction property. The proposed filter banks are designed by interpolation of samples in the time domain. Instead of a fixed filter bank design, the described algorithms determine the filter parameters at each step of the analysis. The filters in the bank consist of a fixed and a variable part. The role of the fixed part is to ensure satisfactory convergence and regularity properties, corresponding to the number of zero moments of the wavelet and scaling functions. Furthermore, the fixed part bounds the filters' frequency responses and defines the adaptation space. The variable part of the filter bank allows changes of the filter parameters within the allowed adaptation space, in order to adapt the filter properties to the analyzed signal. A new factorization of biorthogonal filters into lifting steps is proposed to enable the separation of the filters into a fixed and a variable part. The proposed factorization exhibits a significant level of independence between the lifting step and the dual lifting step, which results in a corresponding degree of independence between the low-pass and high-pass filters in the bank. Furthermore, the proposed factorization is suitable for determining the necessary filter order at each level of decomposition. This fact is used for the construction of new wavelet filter banks with a variable number of zero moments.
Such filter banks show significant benefits compared to fixed banks: the residual signal after interpolation is smaller, with no amplified transients at signal singularities, which are typical for high-order fixed filters. In this thesis, filter adaptation algorithms were compared with respect to the adaptation criterion and interval, as well as their convergence properties. The minimum-absolute-error and minimum-square-error criteria were used, due to their efficient implementation. The adaptation interval determines the speed of adaptation, which is inversely proportional to the variance of the filter parameter estimates, as well as the degree to which the optimal filter bank is approximated. The position of the adaptation interval relative to the observed moment also influences the degree of approximation. It is shown that only causal, strongly asymmetric adaptation intervals enable perfect reconstruction without the need to transfer the filter parameters to the reconstruction side. The fixed part of the filter bank creates an additional requirement for spectral compensation of the adaptation criterion; all the necessary spectral-compensation algorithms were developed in the thesis. In conclusion, the proposed time-variant and non-linear filter banks represent a new approach to signal decomposition. In the analytical sense, the described filter banks offer additional information about the decomposed signal, useful for the extraction of its basic components. They combine subband decomposition, parametric modeling, and multiresolution analysis of the signal. The proposed filter banks enable a number of applications, such as compression and noise suppression, with lower signal degradation than in the case of fixed filter banks.
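    The ladder (lifting) structure underlying the thesis can be illustrated with the classic integer 5/3 lifting pair: the predict and update steps are exactly invertible regardless of rounding, which is the mechanism behind the integer-to-integer mapping and the perfect-reconstruction property described above. This is the standard fixed 5/3 scheme, not the adaptive banks proposed in the thesis:

```python
import numpy as np

def lgt53_forward(x):
    """Integer-to-integer 5/3 lifting: split, predict, update (periodic)."""
    x = np.asarray(x, dtype=np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()
    # Predict (dual lifting): detail = odd sample minus even-sample average.
    d -= (s + np.roll(s, -1)) >> 1
    # Update (lifting): adjust evens so they keep the running average.
    s += (d + np.roll(d, 1) + 2) >> 2
    return s, d

def lgt53_inverse(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    s = s - ((d + np.roll(d, 1) + 2) >> 2)
    d = d + ((s + np.roll(s, -1)) >> 1)
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x
```

    Because each step only adds (or subtracts) a rounded function of the other channel, reconstruction is exact even with the integer floor operations; an adaptive bank changes the predictor from step to step while keeping this same invertible ladder form.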